
    Multivariate texture discrimination based on geodesics to class centroids on a generalized Gaussian Manifold

    A texture discrimination scheme is proposed wherein probability distributions are deployed on a probabilistic manifold for modeling the wavelet statistics of images. We consider the Rao geodesic distance (GD) to the class centroid for texture discrimination in various classification experiments. We compare the performance of the GD to the class centroid with that of the Euclidean distance in the same setting, in terms of both accuracy and computational complexity. We also compare our proposed classification scheme with the k-nearest neighbor algorithm. Univariate and multivariate Gaussian and Laplace distributions, as well as generalized Gaussian distributions with a variable shape parameter, are each evaluated as statistical models for the wavelet coefficients. The GD to the centroid outperforms the Euclidean distance and yields superior discrimination compared to the k-nearest neighbor approach.
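    As a concrete illustration of the nearest-centroid rule, the sketch below assumes the simplest of the evaluated models, a zero-mean univariate Gaussian per wavelet subband, for which the Rao geodesic distance has the closed form sqrt(2)·|ln(σ_a/σ_b)|. The per-subband aggregation and the centroid definition (geometric mean of the training standard deviations) are illustrative assumptions, not the paper's exact recipe.

```python
import numpy as np

def rao_gd_zero_mean_gauss(sigma_a, sigma_b):
    # Fisher-Rao geodesic distance between two zero-mean univariate Gaussians:
    # sqrt(2) * |ln(sigma_a / sigma_b)|.
    return np.sqrt(2.0) * abs(np.log(sigma_a / sigma_b))

def class_centroid(train_sigmas):
    # Per-subband geometric mean of the training sigmas, taken here as the
    # centroid under the log-ratio metric above (assumption for this sketch).
    return np.exp(np.mean(np.log(np.asarray(train_sigmas)), axis=0))

def classify(sample_sigmas, centroids):
    # Assign the sample (one sigma per wavelet subband) to the class whose
    # centroid minimises the summed per-subband geodesic distance.
    dists = [
        sum(rao_gd_zero_mean_gauss(s, c) for s, c in zip(sample_sigmas, centroid))
        for centroid in centroids
    ]
    return int(np.argmin(dists))
```

    The Euclidean baseline mentioned in the abstract corresponds, in this simplified setting, to replacing the log-ratio distance with a plain difference of the subband standard deviations.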

    A hybrid method for accurate iris segmentation on at-a-distance visible-wavelength images

    This work describes a new hybrid method for accurate iris segmentation from full-face images, independently of the ethnicity of the subject. It is based on a combination of three methods: facial key-point detection, the integro-differential operator (IDO), and mathematical morphology. First, facial landmarks are extracted by means of the Chehra algorithm in order to obtain the eye location. Then, the IDO is applied to the extracted sub-image containing only the eye in order to locate the iris. Once the iris is located, a series of mathematical morphological operations is performed to segment it accurately. Results are obtained and compared among four ethnicities (Asian, Black, Latino and White) as well as against two other iris segmentation algorithms. In addition, robustness against rotation, blurring and noise is assessed. Our method obtains state-of-the-art performance and remains robust under small amounts of blur, noise and/or rotation. Furthermore, it is fast, accurate, and its code is publicly available.
    Fuentes-Hurtado, F. J.; Naranjo Ornedo, V.; Diego-Mas, J. A.; Alcañiz Raya, M. L. (2019). A hybrid method for accurate iris segmentation on at-a-distance visible-wavelength images. EURASIP Journal on Image and Video Processing, 2019(1):1-14. https://doi.org/10.1186/s13640-019-0473-0
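    The integro-differential operator used above searches, over candidate centres and radii, for the circular contour with the largest blurred radial derivative of the normalised intensity line integral. The sketch below is a minimal single-centre version (numpy/scipy assumed, parameters illustrative); a full detector would sweep candidate centres over the cropped eye region and keep the (x, y, r) triple with the strongest response.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def integro_differential_operator(gray, cx, cy, r_min, r_max, sigma=2.0, n_angles=360):
    # For one candidate centre (cx, cy), return the radius maximising the
    # blurred radial derivative of the normalised circular line integral.
    radii = np.arange(r_min, r_max)
    thetas = np.linspace(0, 2 * np.pi, n_angles, endpoint=False)
    integrals = []
    for r in radii:
        xs = np.clip((cx + r * np.cos(thetas)).astype(int), 0, gray.shape[1] - 1)
        ys = np.clip((cy + r * np.sin(thetas)).astype(int), 0, gray.shape[0] - 1)
        # Mean intensity on the circle = line integral normalised by 2*pi*r.
        integrals.append(gray[ys, xs].mean())
    derivative = np.abs(np.diff(integrals))          # partial derivative w.r.t. radius
    response = gaussian_filter1d(derivative, sigma)  # smoothing kernel G_sigma(r)
    best = int(np.argmax(response))
    return radii[best + 1], response[best]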

    Modelling Visual Search with the Selective Attention for Identification Model (VS-SAIM): A Novel Explanation for Visual Search Asymmetries

    In earlier work, we developed the Selective Attention for Identification Model (SAIM [16]). SAIM models the human ability to perform translation-invariant object identification in multiple object scenes. SAIM suggests that central for this ability is an interaction between parallel competitive processes in a selection stage and a object identification stage. In this paper, we applied the model to visual search experiments involving simple lines and letters. We presented successful simulation results for asymmetric and symmetric searches and for the influence of background line orientations. Search asymmetry refers to changes in search performance when the roles of target item and non-target item (distractor) are swapped. In line with other models of visual search, the results suggest that a large part of the empirical evidence can be explained by competitive processes in the brain, which are modulated by the similarity between target and distractor. The simulations also suggest that another important factor is the feature properties of distractors. Finally, the simulations indicate that search asymmetries can be the outcome of interactions between top-down (knowledge about search items) and bottom-up (feature of search items) processing. This interaction in VS-SAIM is dominated by a novel mechanism, the knowledge-based on-centre-off-surround receptive field. This receptive field is reminiscent of the classical receptive fields but the exact shape is modulated by both, top-down and bottom-up processes. The paper discusses supporting evidence for the existence of this novel concept
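    For readers unfamiliar with the classical receptive field the abstract refers to, a standard on-centre-off-surround profile can be built as a difference of Gaussians, as in the sketch below; in VS-SAIM the field's shape is additionally modulated by top-down knowledge of the search items, which this illustration does not attempt to model.

```python
import numpy as np

def on_centre_off_surround(size=15, sigma_c=1.5, sigma_s=4.0):
    # Classical on-centre-off-surround receptive field as a difference of
    # Gaussians: a narrow excitatory centre minus a broader inhibitory surround.
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    centre = np.exp(-(xx**2 + yy**2) / (2 * sigma_c**2))
    surround = np.exp(-(xx**2 + yy**2) / (2 * sigma_s**2))
    return centre / centre.sum() - surround / surround.sum()
```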

    Measures in Visualization Space

    Measurement is an integral part of modern science, providing the fundamental means for evaluation, comparison, and prediction. In the context of visualization, several different types of measures have been proposed, including approaches that evaluate particular aspects of visualization techniques, their perceptual characteristics, and even economic factors. Furthermore, there are approaches that attempt to provide means for measuring general properties of the visualization process as a whole. Measures can be quantitative or qualitative, and one of the primary goals is to provide objective means for reasoning about visualizations and their effectiveness. As such, they play a central role in the development of scientific theories for visualization. In this chapter, we provide an overview of the current state of the art, survey and classify different types of visualization measures, characterize their strengths and drawbacks, and provide an outline of open challenges for future research.

    Automatic Robust Neurite Detection and Morphological Analysis of Neuronal Cell Cultures in High-content Screening

    Cell-based high content screening (HCS) is becoming an important and increasingly favored approach in therapeutic drug discovery and functional genomics. In HCS, changes in cellular morphology and biomarker distributions provide an information-rich profile of cellular responses to experimental treatments such as small molecules or gene knockdown probes. One obstacle that currently exists with such cell-based assays is the availability of image processing algorithms that are capable of reliably and automatically analyzing large HCS image sets. HCS images of primary neuronal cell cultures are particularly challenging to analyze due to complex cellular morphology. Here we present a robust method for quantifying and statistically analyzing the morphology of neuronal cells in HCS images. The major advantages of our method over existing software lie in its capability to correct non-uniform illumination using the contrast-limited adaptive histogram equalization method; segment neuromeres using Gabor-wavelet texture analysis; and detect faint neurites by a novel phase-based neurite extraction algorithm that is invariant to changes in illumination and contrast and can accurately localize neurites. Our method was successfully applied to analyze a large HCS image set generated in a morphology screen for polyglutamine-mediated neuronal toxicity using primary neuronal cell cultures derived from embryos of a Drosophila Huntington's Disease (HD) model.
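    Two of the named building blocks, contrast-limited adaptive histogram equalization and Gabor-wavelet texture features, are available in common image-processing libraries. The sketch below shows one plausible way to chain them with scikit-image; the parameter values are illustrative, and the phase-based neurite extraction step is not reproduced here.

```python
import numpy as np
from skimage import exposure, filters

def preprocess_and_texture(img):
    # Contrast-limited adaptive histogram equalization corrects uneven illumination.
    corrected = exposure.equalize_adapthist(img, clip_limit=0.02)
    # Gabor responses at several orientations; magnitudes serve as texture features.
    feats = []
    for theta in np.linspace(0, np.pi, 4, endpoint=False):
        real, imag = filters.gabor(corrected, frequency=0.2, theta=theta)
        feats.append(np.hypot(real, imag))
    return corrected, np.stack(feats, axis=-1)
```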

    Molecular Predictors of 3D Morphogenesis by Breast Cancer Cell Lines in 3D Culture

    Correlative analysis of molecular markers with phenotypic signatures is the simplest model for hypothesis generation. In this paper, a panel of 24 breast cell lines was grown in 3D culture, their morphology was imaged through phase contrast microscopy, and computational methods were developed to segment and represent each colony at multiple dimensions. Subsequently, subpopulations from these morphological responses were identified through consensus clustering to reveal three clusters of round, grape-like, and stellate phenotypes. In some cases, cell lines with particular pathobiological phenotypes clustered together (e.g., ERBB2-amplified cell lines sharing the same morphometric properties as the grape-like phenotype). Next, associations with molecular features were identified through (i) differential analysis within each morphological cluster and (ii) regression analysis across the entire panel of cell lines. In both cases, the dominant genes that are predictive of the morphological signatures were identified. Specifically, PPARγ was associated with the invasive stellate morphological phenotype, which corresponds to triple-negative pathobiology. PPARγ was validated through two supporting biological assays.
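    Consensus clustering of the morphometric features can be approximated by repeatedly clustering random subsamples and cutting the resulting co-association matrix. The sketch below (scikit-learn/scipy, with illustrative parameter choices) is one such approximation rather than the authors' exact procedure.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from sklearn.cluster import KMeans

def consensus_cluster(X, k=3, n_runs=100, subsample=0.8, seed=0):
    # Repeated k-means on random subsamples; co[i, j] counts how often samples
    # i and j land in the same cluster when both are drawn.
    rng = np.random.default_rng(seed)
    n = X.shape[0]
    co = np.zeros((n, n))
    counts = np.zeros((n, n))
    for _ in range(n_runs):
        idx = rng.choice(n, size=int(subsample * n), replace=False)
        labels = KMeans(n_clusters=k, n_init=10,
                        random_state=int(rng.integers(1_000_000))).fit_predict(X[idx])
        same = (labels[:, None] == labels[None, :]).astype(float)
        co[np.ix_(idx, idx)] += same
        counts[np.ix_(idx, idx)] += 1.0
    consensus = np.divide(co, counts, out=np.zeros_like(co), where=counts > 0)
    # 1 - consensus acts as a distance; cut the average-linkage tree into k groups
    # (e.g. round, grape-like and stellate colony phenotypes).
    condensed = 1.0 - consensus[np.triu_indices(n, 1)]
    return fcluster(linkage(condensed, method="average"), t=k, criterion="maxclust")
```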